Application of frames in Chebyshev and conjugate gradient methods

Authors

  • E. Afroomand, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan, Iran.
  • H. Jamali, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan, Iran.
Abstract:

Given a frame of a separable Hilbert space $H$, we present iterative methods for solving an operator equation $Lu=f$, where $L$ is a bounded, invertible, and symmetric operator on $H$. We present algorithms based on knowledge of the frame bounds, the Chebyshev method, and the conjugate gradient method, in order to obtain approximate solutions to the problem. We then investigate their convergence and optimality.
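
As a concrete illustration of the Chebyshev method in this setting: if $L$ is represented by a symmetric positive definite matrix whose spectrum lies in an interval $[a,b]$ with $0 < a < b$ (the role the frame bounds play in the paper), the classical Chebyshev semi-iteration applies. The following is a minimal finite-dimensional sketch in Python/NumPy, not the frame-based algorithm of the paper itself; the function name and interface are illustrative.

    import numpy as np

    def chebyshev_iteration(L, f, a, b, x0=None, tol=1e-10, max_iter=500):
        # Chebyshev semi-iteration for L x = f, assuming the spectrum of the
        # symmetric positive definite matrix L lies in [a, b], 0 < a < b
        # (e.g. bounds supplied by the frame bounds).
        x = np.zeros_like(f) if x0 is None else x0.copy()
        theta = 0.5 * (b + a)          # center of the spectral interval
        delta = 0.5 * (b - a)          # half-width of the spectral interval
        sigma = theta / delta
        rho = 1.0 / sigma
        r = f - L @ x
        d = r / theta
        for _ in range(max_iter):
            x = x + d
            r = f - L @ x
            if np.linalg.norm(r) <= tol * np.linalg.norm(f):
                break
            rho_next = 1.0 / (2.0 * sigma - rho)
            d = rho_next * rho * d + (2.0 * rho_next / delta) * r
            rho = rho_next
        return x

    # Example: the spectrum of L is {1, 2, 3, 4}, so take a = 1, b = 4.
    L = np.diag([1.0, 2.0, 3.0, 4.0])
    f = np.ones(4)
    u = chebyshev_iteration(L, f, a=1.0, b=4.0)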


Similar resources

Richardson and Chebyshev Iterative Methods by Using G-frames

In this paper, we design some iterative schemes for solving the operator equation $Lu=f$, where $L : H \rightarrow H$ is a bounded, invertible, and self-adjoint operator on a separable Hilbert space $H$. In this regard, the Richardson and Chebyshev iterative methods are two outstanding as well as long-standing ones. They can be implemented in different ways via different concepts. In this paper...
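
For comparison with the Chebyshev sketch above, the Richardson scheme mentioned here reduces, in the finite-dimensional case, to a damped fixed-point iteration. A minimal sketch follows, with a and b again standing in for the (G-)frame bounds; all names are illustrative.

    import numpy as np

    def richardson_iteration(L, f, a, b, x0=None, tol=1e-10, max_iter=10000):
        # Damped Richardson iteration x <- x + omega * (f - L x) for L x = f,
        # with the classical relaxation omega = 2 / (a + b), which is optimal
        # when the spectrum of the symmetric matrix L lies in [a, b].
        x = np.zeros_like(f) if x0 is None else x0.copy()
        omega = 2.0 / (a + b)
        for _ in range(max_iter):
            r = f - L @ x
            if np.linalg.norm(r) <= tol * np.linalg.norm(f):
                break
            x = x + omega * r
        return x

Each Richardson step contracts the error by a factor of at most (b - a)/(b + a); the Chebyshev recurrence accelerates this by implicitly varying the step across iterations.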


Extensions of the Hestenes-Stiefel and Polak-Ribiere-Polyak conjugate gradient methods with sufficient descent property

Using the search directions of a recent class of three-term conjugate gradient methods, modified versions of the Hestenes-Stiefel and Polak-Ribiere-Polyak methods are proposed that satisfy the sufficient descent condition. The methods are shown to be globally convergent when the line search fulfills the (strong) Wolfe conditions. Numerical experiments are done on a set of CUTEr unconstrained opti...
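
As a rough sketch of the classical Hestenes-Stiefel and Polak-Ribiere-Polyak updates that such modified methods build on (this is the textbook two-term form, not the three-term variants proposed in the paper), a nonlinear CG loop with a Wolfe line search might look as follows; nonlinear_cg and its fallback step are illustrative choices.

    import numpy as np
    from scipy.optimize import line_search, rosen, rosen_der

    def nonlinear_cg(fun, grad, x0, variant="PRP", tol=1e-6, max_iter=500):
        # Nonlinear conjugate gradient with the Hestenes-Stiefel ("HS") or
        # Polak-Ribiere-Polyak ("PRP") beta and a (strong) Wolfe line search.
        x = x0.astype(float).copy()
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            alpha = line_search(fun, grad, x, d, gfk=g)[0]
            if alpha is None:      # line search failed: take a small fallback step
                alpha = 1e-4
            x = x + alpha * d
            g_new = grad(x)
            y = g_new - g          # gradient change along the step
            if variant == "HS":
                beta = (g_new @ y) / (d @ y)
            else:                  # PRP
                beta = (g_new @ y) / (g @ g)
            beta = max(beta, 0.0)  # PRP+/HS+ safeguard, which also aids descent
            d = -g_new + beta * d
            g = g_new
        return x

    x_min = nonlinear_cg(rosen, rosen_der, np.array([-1.2, 1.0]))  # -> ~(1, 1)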


Towards Stochastic Conjugate Gradient Methods

The method of conjugate gradients provides a very effective way to optimize large, deterministic systems by gradient descent. In its standard form, however, it is not amenable to stochastic approximation of the gradient. Here we explore a number of ways to adapt ideas from conjugate gradient methods to the stochastic setting, using fast Hessian-vector products to obtain curvature information cheaply. I...
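
The fast Hessian-vector products mentioned here can be obtained without ever forming the Hessian: Pearlmutter's R-operator gives them exactly, and a finite difference of the gradient gives a cheap approximation. A minimal sketch of the latter, with hypothetical names:

    import numpy as np

    def hvp(grad, x, v, eps=1e-6):
        # Approximate the Hessian-vector product H(x) @ v by a central finite
        # difference of the gradient: two gradient evaluations, no Hessian.
        return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

    # Curvature along a search direction d then yields the CG-style step
    # length without forming H:  alpha = -(g @ d) / (d @ hvp(grad, x, d))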


Conjugate Gradient Methods in Training Neural Networks

Training of artificial neural networks is normally a time-consuming task due to the iterative search imposed by the implicit nonlinearity of the network behavior. For the supervised learning of multilayer feedforward neural networks, the backpropagation algorithm has proven to be one of the most successful neural network algorithms. Although backpropagation training has proved to be effi...


Volume 43, Issue 5
Pages 1265–1279
Publication date: 2017-10-31
